Prune the Convolutional Neural Networks with Sparse Shrink
Authors
Abstract
Nowadays, it is still difficult to adapt Convolutional Neural Network (CNN) based models for deployment on embedded devices. The heavy computation and large memory footprint of CNN models become the main burden in real applications. In this paper, we propose a “Sparse Shrink” algorithm to prune an existing CNN model. By analyzing the importance of each channel via sparse reconstruction, the algo...
Similar Resources
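The abstract describes scoring each channel's importance and pruning the least important ones. The sketch below illustrates that general idea under a simplifying assumption: the paper scores channels via sparse reconstruction, whereas here a plain L1-norm of each filter is used as a stand-in importance proxy, so this is not the paper's Sparse Shrink procedure itself.

```python
import numpy as np

def prune_channels(weights, keep_ratio=0.5):
    """Prune output channels of a conv layer by an importance score.

    weights: array of shape (out_channels, in_channels, kH, kW).
    NOTE: the paper derives channel importance via sparse
    reconstruction; the L1 norm used here is only a simple proxy
    for illustration.
    """
    out_channels = weights.shape[0]
    n_keep = max(1, int(out_channels * keep_ratio))
    # Importance proxy: sum of absolute filter weights per output channel.
    scores = np.abs(weights).reshape(out_channels, -1).sum(axis=1)
    # Keep the n_keep highest-scoring channels, preserving their order.
    keep = np.sort(np.argsort(scores)[-n_keep:])
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3))
pruned, kept = prune_channels(w, keep_ratio=0.5)
print(pruned.shape)  # (4, 3, 3, 3)
```

In a full pipeline, pruning output channels of one layer also requires slicing the corresponding input channels of the next layer and fine-tuning the slimmed model to recover accuracy.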
Sparse Diffusion-Convolutional Neural Networks
The predictive power and overall computational efficiency of diffusion-convolutional neural networks make them an attractive choice for node classification tasks. However, a naive dense-tensor-based implementation of DCNNs leads to O(N²) memory complexity, which is prohibitive for large graphs. In this paper, we introduce a simple method for thresholding input graphs that provably reduces memory r...
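The thresholding idea in this abstract can be illustrated generically: dropping low-weight edges from a dense adjacency matrix yields a sparser graph that needs far less memory. This is a hypothetical illustration of the pre-processing concept, not the paper's exact procedure or threshold rule.

```python
import numpy as np

def threshold_graph(adj, t):
    """Zero out edges with weight below t, sparsifying the graph.

    adj: dense (N, N) adjacency matrix of edge weights.
    Generic illustration only; the paper's thresholding scheme
    may differ.
    """
    return np.where(adj >= t, adj, 0.0)

adj = np.array([[0.0, 0.9, 0.1],
                [0.9, 0.0, 0.4],
                [0.1, 0.4, 0.0]])
sparse = threshold_graph(adj, 0.3)
print(np.count_nonzero(sparse))  # 4
```

Once most entries are zero, the matrix can be stored in a compressed sparse format (e.g. CSR), which is what makes the memory savings concrete in practice.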
Spatially-sparse convolutional neural networks
Convolutional neural networks (CNNs) perform well on problems such as handwriting recognition and image classification. However, the performance of the networks is often limited by budget and time constraints, particularly when trying to train deep networks. Motivated by the problem of online handwriting recognition, we developed a CNN for processing spatially-sparse inputs; a character drawn w...
Sparse 3D convolutional neural networks
We have implemented a convolutional neural network designed for processing sparse three-dimensional input data. The world we live in is three dimensional, so there are a large number of potential applications, including 3D object recognition and analysis of space-time objects. In the quest for efficiency, we experiment with CNNs on the 2D triangular-lattice and 3D tetrahedral-lattice...
Learning to Prune Filters in Convolutional Neural Networks
Many state-of-the-art computer vision algorithms use large-scale convolutional neural networks (CNNs) as basic building blocks. These CNNs are known for their huge number of parameters, high redundancy in weights, and tremendous computing resource consumption. This paper presents a learning algorithm to simplify and speed up these CNNs. Specifically, we introduce a “try-and-learn” algorithm to...
Journal
Journal title: Electronic Imaging
Year: 2017
ISSN: 2470-1173
DOI: 10.2352/issn.2470-1173.2017.6.mobmu-306